OPENSTACK BUGSMASH

Hi

I am back with new OpenStack stories. March 7 and 8 were marked as OpenStack BugSmash days; it was a global event, and in India it was sponsored by Intel, Bangalore. I was one of the organizers; I had been tracking the event for a long time and contributing to it so that it would be a great success.

The idea of the BugSmash was to encourage more contributions to the community and to make progress on the targets of the next release. It was an amazing opportunity to learn from and discuss with developers who have been in the community for a while and who mentor newcomers.

The first day was spent analyzing the problems and working out how to approach their solutions; it involved discussions with peers and reaching out to the developer community. I was involved with the OpenStack Searchlight community, which needed tests written for its Nova plugin. With some effort and help from other developers, I wrote a patch set; to my disappointment, there were issues that were not caused by the patch I wrote, so I asked the community on IRC for help.


The second day involved a lot of discussion; it turned out that there were some pre-existing errors that had to be fixed before I could proceed. The community helped me correct the code and add my changes on top of it. Finally, I submitted the patch, which is still a work in progress because it needs a lot more work. The patch can be tracked here.

It was an amazing event. I had to fly back the same day because of the limited time I had, but it was an extremely educational experience.

Thanks to my mentor Nikhil Komawar, who pushed me to organize such a fine event, and to the entire Intel team for hosting the event and making it a huge success.

OpenStack Summit, Tokyo

Hi folks, today I will be telling you about the OpenStack Summit in Tokyo.

As you all know, I was an Outreachy intern at OpenStack in the May-August 2015 round, and being a part of the community has helped me hone my skills at every level. OpenStack is a large and very humble community that organizes a summit every six months for its subsequent releases, and this time it was in Tokyo.

The OpenStack community was very kind to provide me with a travel grant that covered most of the expenses incurred; I am grateful to them for giving me such a great opportunity. Attending this summit was a life-changing experience for me, and it was extremely educational.

Be it the activity-driven workshops for women or the design summit, it gave me the opportunity to learn from the amazing workforce of OpenStack.


The women-oriented workshops taught me to be confident in myself and to face challenges gracefully. The design summit taught me tactics to improve my skills. During my internship I was a part of the Glance community, so I attended the corresponding sessions, but apart from these I also got involved in a new project called Searchlight, a search utility for OpenStack based on Elasticsearch. This project was suggested by my mentor so that I could advance a little further in OpenStack.

In between sessions, whatever time I got I used for networking and exploring the marketplace, where many companies working on OpenStack, like Red Hat, Rackspace, Mirantis, etc., had their booths. Talking to them about OpenStack taught me many new things, and some booths were kind enough to give me books as well, which will help me understand OpenStack at a deeper level.

During the evenings I tried attending other events, including Red Hat's event on job opportunities, where I met Amber, a Red Hat employee. She is a very nice person and told me how I could apply.

OpenStack is the best thing that has happened to me so far. I really thank the board members and Outreachy for providing such a great opportunity.

OpenStack awareness camp and PyDelhi at JNU

Hi everyone!

I am back with some more interesting events from my journey at OpenStack. My internship is coming to an end soon, so I decided I should share my experience with the communities I am locally associated with. This week I attended the PyDelhi meetup at Jawaharlal Nehru University (JNU), Delhi, where Sajid Akhtar and Syed Armani had been asked to introduce PyDelhi to OpenStack. The amazing part is that I was asked to give a flash talk about my experience at OpenStack, and believe me, it was one fine experience. The arrangements were made by Vipin Mittal, an M.Tech student at JNU, and the PyDelhi community, and kudos to them for that. Following PyDelhi's customs, the event started with a workshop on Kivy by Akshay, which was very informative.

Sajid Akhtar and Syed Armani were the key speakers; both of them are OpenStack enthusiasts who are promoting OpenStack services in India at a large scale. They walked the crowd through the basics of OpenStack, its architecture and other fundamentals.

After that it was time for me to pitch in; I explained what my project was about and what I was working on.


This was my first time addressing a crowd at that scale, and I learned a lot. Now I feel ready to move forward and contribute to the OpenStack community at the next level. 😀

You can find the slides I shared here.

Thanks  for passing by.

Glance-Scrubber

Glance-scrubber is a utility for the Image Service that cleans up images that have been deleted; its configuration is stored in the glance-scrubber.conf file.

Images in Glance can be in one of the following statuses:

  • queued
  • saving
  • active
  • killed
  • deleted
  • pending_delete

In glance-scrubber we are concerned with the deleted and pending_delete statuses. Deleting an image can take a fair amount of time and memory, which may slow down other processes; this is where the pending_delete status comes into the picture. It marks the image for delayed deletion, so the actual delete happens later. Images with the status pending_delete get deleted after a specific scrub time, and once an image has been set to pending_delete, it is no longer recoverable.
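For context, the delayed-delete behaviour itself is switched on on the API side, in glance-api.conf. The two lines below are a minimal sketch of the relevant options as I understand them; treat the exact names and the example value as assumptions and check the sample configuration shipped with your release:

delayed_delete = True
# Time in seconds to wait before the scrubber actually removes a pending_delete image
scrub_time = 43200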

To use the scrubber from the command line, there is a specific command, glance-scrubber, but for that we need to set up the configuration files correctly. By default, DevStack does not set up glance-scrubber.conf when it clones the Glance repository, so we need to create it first.

The configuration files are stored in Glance's configuration directory. If you are using Ubuntu, change into it:

cd /etc/glance

Various configuration files can be seen there. Use any text editor to create a new configuration file, name it glance-scrubber.conf, and paste in the following:

[DEFAULT]

# Show more verbose log output (sets INFO log level output)
#verbose = False

# Show debugging output in logs (sets DEBUG log level output)
#debug = False

# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/scrubber.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
#use_syslog = False

# Should we run our own loop or rely on cron/scheduler to run us
daemon = False

# Loop time between checking for new items to schedule for delete
wakeup_time = 300

# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-api.conf
scrubber_datadir = /var/lib/glance/scrubber

# Only one server in your deployment should be designated the cleanup host
cleanup_scrubber = False

# pending_delete items older than this time are candidates for cleanup
cleanup_scrubber_time = 86400

# Address to find the registry server for cleanups
registry_host = 0.0.0.0

# Port the registry server is listening on
registry_port = 9191

# Auth settings if using Keystone
# auth_url = http://127.0.0.1:5000/v2.0/
# admin_tenant_name = %SERVICE_TENANT_NAME%
# admin_user = %SERVICE_USER%
# admin_password = %SERVICE_PASSWORD%

# Directory to use for lock files. Default to a temp directory
# (string value). This setting needs to be the same for both
# glance-scrubber and glance-api.
#lock_path=<None>

# API to use for accessing data. Default value points to sqlalchemy
# package, it is also possible to use: glance.db.registry.api
#data_api = glance.db.sqlalchemy.api

# ================= Security Options ==========================

# AES key for encrypting store 'location' metadata, including
# -- if used -- Swift or S3 credentials
# Should be set to a random string of length 16, 24 or 32 bytes
#metadata_encryption_key = <16, 24 or 32 char registry metadata key>

# ================= Database Options ==========================

[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
#connection=sqlite:////glance/openstack/common/db/$sqlite_db

# The SQLAlchemy connection string used to connect to the
# slave database (string value)
#slave_connection=

# Timeout before idle sql connections are reaped (integer value)
#idle_timeout=3600

# Minimum number of SQL connections to keep open in a pool (integer value)
#min_pool_size=1

# Maximum number of SQL connections to keep open in a pool (integer value)
#max_pool_size=<None>

# Maximum db connection retries during startup. (setting -1
# implies an infinite retry count) (integer value)
#max_retries=10

# Interval between retries of opening a sql connection (integer value)
#retry_interval=10

# If set, use this value for max_overflow with sqlalchemy (integer value)
#max_overflow=<None>

# Verbosity of SQL debugging information. 0=None, 100=Everything (integer value)
#connection_debug=0

# Add python stack traces to SQL as comment strings (boolean value)
#connection_trace=false

# If set, use this value for pool_timeout with sqlalchemy (integer value)
#pool_timeout=<None>

We can use various auth flavors, which range from trusted-auth to keystone to noauth, etc.

In my particular case, I had to use noauth to make the scrubber work, with some code in client.py commented out, because the scrubber code still needs some improvements that are being worked on. To comment it out, go to

/opt/stack/glance/glance/common/client.py

and comment out lines 368 and 369.
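With the configuration in place, you can run the scrubber by hand and point it at the file created above. A minimal sketch (the --config-file option is the standard oslo.config flag that Glance's commands accept):

glance-scrubber --config-file /etc/glance/glance-scrubber.conf

If daemon = True is set in the configuration, the same command keeps running and wakes up every wakeup_time seconds instead of exiting after a single pass.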

Voila, it works !!

Thanks for reading, I will soon make some more entries 😀

Introduction to Ansible

Ansible is a server configuration management, orchestration and deployment tool. With Ansible you can manage the configuration of your servers quite easily; orchestrate them, i.e. ask one server to do one thing and then do something with another; and deploy to them, i.e. upload your code onto the servers.

Ansible is very simple to get started with, but it has a lot of power. It is agentless: it uses SSH and only requires Python on the servers. SSH is used to make all the connections and the necessary configuration changes. It favors pushing configuration, so you run tasks from a workstation against those servers; configuration is written in YAML. Sets of tasks, grouped into playbooks, run against sets of hosts.

Host Inventory

The hosts you want to manage with Ansible are listed in an inventory and can be organized into groups to make it easier to manage multiple servers; within the inventory you can set ports or override other connection settings. The inventory can be a flat file, or a plugin can be used to pull hosts from cloud providers like AWS or Rackspace.

Example of an inventory file:

[WebServers] ←group

web1.example.com

web2.example.com

We can also have groups within groups.
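As a quick sanity check, assuming the file above is saved as inventory and Ansible can SSH into the hosts, you can target the group with an ad-hoc command that runs the ping module:

ansible WebServers -i inventory -m ping

Ansible will connect to web1.example.com and web2.example.com and report "pong" for every host it can reach.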

Playbooks

A playbook is essentially a script of tasks to run against a set of hosts. It contains plays, plays contain tasks, and tasks call modules, which do all the work.
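To run a playbook against the hosts in your inventory, you use the ansible-playbook command. A minimal sketch, assuming a playbook file named site.yml (the name is just an example):

ansible-playbook -i inventory site.yml

Adding --check performs a dry run, showing what would change without actually changing anything.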

Plays

A play is a set of tasks and is typically applied against a set of hosts.

[Figure: an example play whose tasks use the apt and template modules]

apt and template in the play above are modules.

Tasks and Modules

Tasks call modules to alter some configuration on a server. Changes are made in an idempotent manner, i.e. you can run the same task multiple times and end up with the same state.

There are over 200 modules provided by Ansible, and you can write your own too. Modules do everything (e.g. install packages, run commands, manage services, copy template files, mount drives, etc.).

Handlers

Handlers are tasks that get run after certain triggers. They are run at the end of a play and only run once, no matter how many times they have been triggered. For example, after changing the configuration of Apache or nginx, you can notify a handler to restart or reload the service so your changes take effect.

Variables, Templates and Facts

  • Variables allow you to easily change your configuration for different environments.
  • Templates allow you to copy configuration files and update certain sections using variables; templates use Jinja2.
  • Facts are information collected about each server in your inventory, e.g. IP address, memory, disk space, etc.

We can use facts to help with server configuration or to apply settings in templates; they are also useful when one server needs to communicate with another.
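You can see exactly which facts Ansible gathers for a host by running the setup module ad hoc, again assuming the inventory file from earlier:

ansible all -i inventory -m setup

This prints every collected fact (IP addresses, memory, disk and OS details, and so on) as JSON, which is handy when deciding which variables to reference in a template.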

Getting started with Tasks API in Glance

Glance's role in OpenStack

Glance provides a service where users can upload and discover data assets that are meant to be used with other services, like images for Nova and templates for Heat.

What are images?

A virtual machine image, abbreviated as 'image', is a single file containing a virtual disk that has a bootable operating system installed on it.

OUTLINE

End users would like to bring their own images into your cloud, but there are some complications associated with this.

COMPLICATIONS

  1. Some end users might not understand what the OpenStack image service is.
    • People might not be aware of what format an image should be in; for example, someone might try to upload a JPEG, but one cannot boot a VM from a JPEG.
  2. Some end users know what the image service is but intentionally upload malicious images to share with other users, in the hope of planting backdoors.
  3. Some end users may upload a malicious image to try to attack the hypervisor itself, which is possible and unsavory.
  4. Some end users have really slow connections or large images.
    • It usually takes some time for images to upload, and lots of slow, long-running uploads can tie up the image service.
    • The image service is quite important, because Nova uses it to boot VMs.

How to get information back to users?

The image status field is not very descriptive, and the uploaded data might not actually be a VM image, so the question is: do we really want to create an image record for something that is not actually an image?

One suggestion is to just do the normal image-create process: the image would be queued, and eventually the service would process it, realize it is not an image, and kill it. But it would be frustrating to wait that long only to find out it did not work.

We need to find a way for end users so that:

  1. The uploaded data can be verified as a VM image.
  2. The image can be scanned for malware or exploits.
  3. The interface used is common across OpenStack installations.
  4. Even after the image upload, the process remains customizable, because different clouds use different technologies, different hypervisors, etc.
  5. Customizability also means that when you learn about a new exploit, you can add a check for it without waiting for the six-month release cycle (e.g. for Liberty).
  6. The process provides useful feedback to the end user; for instance, if an upload fails it would be nice to get a message saying the image was killed and why. So we need feedback.
  7. End users may want to download images to move them to another cloud for various reasons.
  8. A provider may want to pre-process an image before it is handed over to the end user.
  9. An end user may have a slow connection that slows down Glance.

Glance also deals with other long-running, asynchronous, image-related activity, and this area needs to be handled as well.

Role of Glance Tasks

  1. They provide a common API across OpenStack installations.
  2. The workflow of the actual operation is customizable per cloud provider.
  3. They don't create images until there is a high probability of success.
  4. They provide a way to deliver meaningful, helpful error messages (how useful these are depends on how the workflow is implemented); the point is that there is a way to do it. The task object you create also expires some time after it reaches a final state.
  5. They free up the normal upload/download path for trusted users.

Creating tasks using the import method

To use the Tasks API, we can use cURL. cURL is a software package consisting of a command-line tool and a library for transferring data using URL syntax. cURL supports various protocols such as DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP.

First of all, we need to source the openrc file; for that, run source openrc admin admin.

The fastest way to get a cURL command is to run glance --debug image-list; this will give us a list of images along with the cURL command it used, something like this:

curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}74fe76ffe176eb077ef4975a2df56c6af0fa5f28' -H 'Content-Type: application/octet-stream' 'http://10.0.2.15:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20'

The above can also be written as:

curl -g -i \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Accept: */*' \
-H 'User-Agent: python-glanceclient' \
-H 'Connection: keep-alive' \
-H 'X-Auth-Token: 709d7c24021d4366b8c3df04ce44e3e6' \
-H 'Content-Type: application/json' \
-d '{"type": "import", "input": {"import_from": "https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img", "import_from_format": "raw", "image_properties": {"name": "test-conversion-1", "container_format": "bare"}}}' \
-X POST http://10.0.2.15:9292/v2/tasks

  • -H specifies a header; these headers are necessary for the request to work properly.
  • -X specifies the HTTP method; it can be GET, POST, DELETE or PUT.
  • -d specifies the data; it can be JSON, XML, etc., and is supposed to be wrapped in single quotes.

The cURL command we got from glance --debug image-list has the -X option near the beginning, whereas above it comes at the end; that doesn't matter, the command above is simply organized more readably, nothing else.

The auth token above is different for each user and is only valid for a specific amount of time, so we need to get a fresh token whenever we use any OpenStack service. To get one, run keystone token-get and copy-paste the token id into the X-Auth-Token header.

In the Content-Type header, replace "application/octet-stream" with "application/json", because the Tasks API uses a JSON schema. The schema looks something like this:

{
    "type": "import",
    "input": {
        "import_from": "swift://cloud.foo/myaccount/mycontainer/path",
        "import_from_format": "qcow2",
        "image_properties": {
            "name": "GreatStack 1.22",
            "tags": ["lamp", "custom"]
        }
    }
}

The data is wrapped in single quotes and follows the JSON schema above. Then comes -X, the HTTP method, which in this case is POST.
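The response to the POST above contains the new task's id and its status. You can then poll the task with a GET on the same endpoint until it reaches a final state; here is a sketch in the same style as the commands above, with the id left as a placeholder:

curl -g -i \
-H 'Accept: application/json' \
-H 'X-Auth-Token: <token from keystone token-get>' \
-X GET http://10.0.2.15:9292/v2/tasks/<task-id>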

Cheers!!

REST APIs

REST stands for Representational State Transfer.

REST basically means using the existing features of the web in a simple and effective way.
Features of the web:
The old and widely accepted HTTP protocol.
HTTP has four main methods: GET, POST, PUT and DELETE.
HTTP is stateless.
URIs (Uniform Resource Identifiers), via which we locate any resource on the web.

REST takes these features of the web and applies certain principles to them; it is nothing but an architectural style built on those principles, using the current HTTP "web" fundamentals.
There are five basic principles that we leverage to create RESTful web services.

1st principle-Everything is a resource

The internet is all about getting data; data can be in the form of a web page, an image, a video file, etc.; it can also be dynamic output, e.g. customer data, a news subscription, etc. The best way to look at REST is to see every piece of data as a resource, be it a physical file, an image, or anything else.
Examples of resources with their URIs:
http://www.GeetikaBatra.com/image/logo.gif (Image Resource)
http://www.GeetikaBatra.com/customer/100 (Dynamically pulled resource)
http://www.GeetikaBatra.com/videos/0001 (Video resource)
http://www.GeetikaBatra.com/home.html (Static resource)
Any other data we want from the internet is likewise a resource from REST's point of view.

2nd Principle-Every resource is identified by a unique identifier

Every resource is identified by a URI. For example, if we want to display a customer with their orders, we could use http://www.GeetikaBatra.com/DisplayCustomerAndOrder.aspx. REST adds one more constraint to the URI: every URI should uniquely identify a resource. In other words, if I want to get the details of a customer named "Shiv", I can use http://www.GeetikaBatra.com/customer/Shiv.
Rather than pointing to a generic page like http://www.GeetikaBatra.com/DisplayCustomerAndOrder.aspx, we point to a unique identifier and retrieve that specific data.

3rd Principle-Simplification

REST uses a simple and uniform interface; simplification is the way to success, i.e. the goal of REST is to have uniform interfaces. When external clients interact with my service, they expect a simplified interface. For example, suppose I have customer and order data that I have exposed on the web, and these are the function names through which external clients can communicate:
Add Customer
Insert orders
Select Customer
Get orders
Delete Customer
Remove orders
Update customer

The above method names are inconsistent and difficult to remember. This is where REST says: keep your interfaces uniform and simple. This can be achieved by using the uniform methods of the HTTP protocol and combining them with the resource names. HTTP already has methods for the common operations, e.g. creating, updating, deleting, etc.

Method     Description

GET        Gets a resource
PUT        Creates or updates a resource
POST       Submits data to a resource
DELETE     Deletes a resource

These are the standard HTTP methods, and if we combine them with our resources, our interface becomes uniform. For example, if a customer is to be added, we can send an HTTP PUT request to the resource URI, i.e. /customer/Shiv.
A uniform interface is nothing but the old combination of HTTP's POST, GET, PUT and DELETE with our resource names. By doing this, we achieve simplified communication, as the sketch below shows.
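To make this concrete, here is a small sketch of what those uniform calls could look like with cURL against the hypothetical customer resource used above (the URIs and the payload are illustrative only):

# Retrieve the customer "Shiv"
curl -X GET http://www.GeetikaBatra.com/customer/Shiv

# Create or update that customer by sending a representation
curl -X PUT -H 'Content-Type: application/json' -d '{"name": "Shiv", "city": "Delhi"}' http://www.GeetikaBatra.com/customer/Shiv

# Remove the customer
curl -X DELETE http://www.GeetikaBatra.com/customer/Shiv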

4th Principle-Everything is a Representation

All communication is done via representations. For example, there is a client, a server, and inside the server there are resources; whenever the client sends data to the server and the server sends back a response, everything travels as representations. So any request or response that goes from the client to the server and back is nothing but a representation.
If you want to create a new customer record, what should you do?

You send a representation of the new customer, for example a document containing the customer name (GeetikaBatra.com), the resource URI (http://www.GeetikaBatra.com/customer/orders) and the city (Delhi). This can be done with an HTTP PUT. Once the resource is created, you will receive something in return: a response representation echoing the same details, the name (GeetikaBatra.com), the URI (http://www.GeetikaBatra.com/customer/orders) and the city (Delhi).

The format can vary; it can be JSON, XML or another format.

All the requests and responses between client and server are called representations.

5th Principle-Be Stateless
Every request should be an independent request, so that we can scale up using load balancing, etc. Independent requests mean that along with the data you also send the state of the request, so that the service can carry on from there for the next step.

For example, a representation containing a user ID (GeetikaBatra.com) and a password (ghsgja#45445) is sent to the system so that it can be validated, and we may get back a response containing true.

Now let us say we want to search for a customer. We send another representation, and in it we also state that we are already logged in successfully so that the service does not ask for the user ID and password again. So we set something like a flag in the request, for example a search scope of All together with a logged-in flag of true.

In other words, every request carries everything it needs and is independent, i.e. stateless.

The Meetup!

Hola folks! I am back with some more updates. This week I attended a meetup in the nearby city of Gurgaon, and it was one amazing experience. The meetup was organized by Sajid Akhtar and Syed Armani, both OpenStack enthusiasts who are promoting OpenStack services in India at a large scale. They started this journey three years back and have since built a community of 4000 members in India, which is an amazing count in itself. The best part is that they are managing these events free of cost.

This was my first meetup, and it was stupendously amazing to meet so many new people who already use OpenStack, and others who are willing to. The arrangements were made by the HP Helion team at the HP centre in Gurgaon, and kudos to them for that. Syed and Sajid shared some of their best experiences with me, and I really appreciate their effort. There were not many developers there; actually, I was the only one, but since I have only just commenced my journey in OpenStack, it was an educational experience all the same.

The talk started with a welcome note by the hosts and organizers, in which they made us aware of the enterprise adoption of OpenStack and the challenges associated with it. These include minimal downtime, lower running costs, greater reliability, a cleaner and safer factory, etc. Their motive is to change the computing paradigm, i.e. the transition of servers from "pets to cattle".

The pets vs. cattle analogy: consider an organization of 300 employees whose infrastructure you need to look after; for this you will certainly buy servers, firewalls, malware protection, etc. The software you run on them is very dear to you, probably like a pet, but at the end of the day maintaining these servers and firewalls is a workload. Now the situation has changed: organizations are moving fast and everything is growing at supersonic speed, so we need faster adoption of technology without stressing about the underlying hardware. With cloud technology evolving, this pet has become cattle; one doesn't need to worry about how things are managed behind the scenes and can focus on the service one wants to provide.

After this, Ramit Surana, a final-year student from Pilani, presented some points on OpenStack Swift, followed by Sajid Akhtar, who raised concerns about the lack of use cases and reference architectures showing how OpenStack will work. Possible use cases of OpenStack are:

  • Basic infrastructure as a service (IaaS)
  • Enterprise dropbox
  • DevOps environments
  • Migration of workloads from public clouds to on-premise
  • Storage as a service
  • Mailing solutions, e-learning, office productivity/collaboration
  • Analytics as a service
  • Scale-out application hosting

The next agenda item was orchestration in OpenStack. Syed Armani discussed some fine details about it, starting with the need for orchestration: in earlier days, just to deploy a dataset we needed a physical box, but now virtual technology is used. So if a customer has an application and you need it in your cloud, what would you do? My first guess is writing shell scripts, but these get very lengthy. A second option is JSON, but instead we can use Heat templates and preconfigured Glance images. One more advantage is that if we need a LAMP stack of, say, 50 virtual machines, with traditional methods we would have to deploy all 50 separately, but with an orchestration engine that can be done in one go.

All in all, it was one fine experience to attend this meetup. What concerns me, though, is the number of women compared to men in OpenStack; I was the only female participant at this meetup. This is really shocking, and we need to work hard on getting more women into the computing world.

And the excitement is at its pinnacle!

Hello folks! I am back with some more updates. In my last post, I talked about my acceptance to Outreachy with the organization OpenStack. Well, I have now commenced my internship and want to share my experience with you all. My project is basically about tasks and scrubbers; first of all I would separate the two words, "Tasks" and "Scrubbers", because to understand any project it is necessary to understand what it demands and what it is all about.

So, to understand these two powerful words, I sought guidance from my mentor Nikhil Komawar; he suggested I read the tasks wiki and welcomed my queries with open arms. He also suggested I use a search engine, since in open source that is the best way to find what you are looking for. Thanks to the OpenStack Foundation for uploading videos from the recent Vancouver Summit, and to all the speakers, especially Brian Rosmaita, I understood the exact concept of tasks in Glance. After discussing this with Nikhil, I was sure that I was moving along the right path.

The next step was to edit the tasks wiki, so I added everything I understood about tasks to the wiki and then had it reviewed by Nikhil Komawar and Brian Rosmaita; they suggested some minor changes but were otherwise happy with it.

Now I am using the DevStack installation to create some tasks and trying to set some custom metadata and regular data.

Well, this coming weekend I will be attending a meetup in a nearby city, and I am super excited about it. 😀

Thanks for passing by, keep checking for more updates. Till then Cya.

Outreachy it is!!

Hello!

Welcome back! Today I will be talking about FOSS and Outreachy. Free and Open Source Software (FOSS) is software that gives the user the freedom to use, copy, study, change, and improve it. There are many Free and Open Source Software licenses under which software can be released with these freedoms. FOSS contributors do various things: software development, system administration, user interface design, graphic design, documentation, community management, marketing, identifying issues and reporting bugs, helping users, event organization, and translations.

Outreachy, earlier known as the Outreach Program for Women, is a program inspired in many ways by Google Summer of Code and by how few women applied for it in the past. The GNOME Foundation first started the internship program with one round in 2006 and then resumed the effort in 2010, with rounds organized twice a year. For the May 2015 round, the program was renamed to Outreachy, with the goal of expanding to engage people from various underrepresented groups, and was moved to Software Freedom Conservancy as its organizational home.

I am a programming enthusiast, and open source is the best place to learn to code in a real-world environment, to enhance your skill set, and to keep up with the latest developments in the technology world. I learned about GSoC, and while searching the internet for a good overview of it I ran into Outreachy. Thanks to the great people at Outreachy, I felt welcomed and encouraged to apply. It has also helped me learn, grow and gain knowledge of OpenStack;

people here have been extremely supportive, and it was all I could ask for; hearing the experiences of some of the most talented minds is what motivated me to work further.

Being new to all of this, community members Victoria Martínez de la Cruz (vkmc) and Shaifali Agrawal helped me get things started from scratch; they also explained how bugs are categorized and how to work with them, and gave me pointers for getting comfortable with DevStack and understanding how OpenStack works. Since DevStack is a standard script to quickly create an OpenStack development environment, I followed this great advice and successfully submitted my first patch; the link to the patch is bug/1411806.

After my first contribution, my confidence went up a level and things actually started to make sense. The community members asked me to choose a project from the OpenStack ideas list; doing so, I ran into the Glance project and its mentor Nikhil Komawar (nikhil_k). Talking to him made me realize that this was the project I wanted to work on, and he helped me solve one of its bugs. I communicated with him regularly and tried to learn as much as I could. Finally, the Outreachy applications were announced and I started working on mine. Under Nikhil's guidance I prepared a rough draft of my application; he reviewed it thoroughly and suggested and made some changes. I also had it reviewed by Victoria Martínez de la Cruz.

The results were announced on 27th April 2015 and I was selected; it was a very satisfying moment and I felt a sense of pride. From the very first day I started planning and communicating with other members of the Glance project. The first meeting I attended was on May 9, 2015; it felt amazing to be a part of it, I learned the general code of conduct of meetings, which is an important thing in any profession, and I tweeted about it.

Before 25th May, I want to create and maintain a decent blog, as suggested by my mentor; well, I think I can strike that task off, since I have already made one 😉. After this I will start working on specifications based on my mentor's suggestions about the project's tasks. Meanwhile, I will try to attend more and more meetings and be more responsive in the IRC channel.

Thanks for passing by, I will soon write again! 😀